Deep image prior (DIP) has recently attracted attention for unsupervised positron emission tomography (PET) image reconstruction because it does not require any prior training dataset. In this paper, we present the first attempt to implement an end-to-end DIP-based fully 3D PET image reconstruction method that incorporates a forward-projection model into the loss function. To make fully 3D PET image reconstruction practical, which was previously prevented by graphics processing unit memory limitations, we modify the DIP optimization into a block-iterative scheme that sequentially learns an ordered sequence of block sinograms. Furthermore, a relative difference penalty (RDP) term is added to the loss function to improve the quantitative accuracy of the PET images. We evaluated the proposed method using a Monte Carlo simulation with [$^{18}$F]FDG PET data of a human brain and a preclinical study on monkey-brain [$^{18}$F]FDG PET data. The proposed method was compared with maximum-likelihood expectation maximization (EM), maximum a posteriori EM with RDP, and hybrid DIP-based PET reconstruction methods. The simulation results showed that the proposed method improved PET image quality by reducing statistical noise while preserving the contrast of brain structures and an inserted tumor better than the other algorithms. In the preclinical experiment, the proposed method recovered finer structures and achieved better contrast. These results indicate that the proposed method can produce high-quality images without a prior training dataset. Thus, it is a key enabling technology for the straightforward and practical implementation of end-to-end DIP-based fully 3D PET image reconstruction.
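As a minimal illustration (not taken from the paper), the block-wise objective of such a method might combine a Poisson negative log-likelihood on one block sinogram with the RDP applied to the DIP output, assuming a DIP network $f_\theta$ with fixed input $z$, forward projector $P$, measured sinogram $y$, sinogram block (subset) $S_b$, voxel neighborhood $\mathcal{N}_j$, penalty weight $\beta$, and RDP edge parameter $\gamma$, and omitting scatter and randoms terms:

$$\mathcal{L}_b(\theta) = \sum_{i \in S_b}\Big([P f_\theta(z)]_i - y_i \log [P f_\theta(z)]_i\Big) + \beta \sum_j \sum_{k \in \mathcal{N}_j} \frac{(x_j - x_k)^2}{x_j + x_k + \gamma\,|x_j - x_k|}, \qquad x = f_\theta(z).$$

Cycling the updates of $\theta$ over the blocks $b$ means that only one block sinogram and its projections need to reside in GPU memory at a time.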
This article presents our generative model for rhythm action games together with applications in business operations. Rhythm action games are video games in which the player is challenged to issue commands at the right timings during a music session. The timings are rendered in the chart, which consists of visual symbols, called notes, flying through the screen. We introduce our deep generative model, GenéLive!, which outperforms the state-of-the-art model by taking into account musical structures through beats and temporal scales. Thanks to its favorable performance, GenéLive! was put into operation at KLab Inc., a Japan-based video game developer, and reduced the business cost of chart generation by as much as half. The application targets included the phenomenal "Love Live!," which has more than 10 million users across Asia and beyond and is one of the few rhythm action franchises that have led the online era of the genre. In this article, we evaluate the generative performance of GenéLive! using production datasets at KLab as well as open datasets for reproducibility, while the model continues to operate in their business. Our code and the model, tuned and trained using a supercomputer, are publicly available.
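As an illustrative sketch only (not KLab's GenéLive! implementation), one way a chart generator might account for beats and multiple temporal scales is to run dilated 1-D convolutions over audio features at several dilation rates and append a beat-phase channel; feature sizes, dilation rates, and the output head below are assumptions.

```python
# Hypothetical multi-scale chart generator sketch; not the authors' architecture.
import torch
import torch.nn as nn

class MultiScaleChartNet(nn.Module):
    def __init__(self, n_mels: int = 80):
        super().__init__()
        # +1 input channel for the beat-phase signal appended to the mel bands.
        self.branches = nn.ModuleList([
            nn.Conv1d(n_mels + 1, 32, kernel_size=3, padding=d, dilation=d)
            for d in (1, 2, 4, 8)   # short- to long-range temporal context
        ])
        self.head = nn.Conv1d(32 * 4, 1, kernel_size=1)  # note-placement logit per frame

    def forward(self, mel, beat_phase):
        # mel: (batch, n_mels, frames); beat_phase: (batch, 1, frames) in [0, 1)
        x = torch.cat([mel, beat_phase], dim=1)
        h = torch.cat([torch.relu(b(x)) for b in self.branches], dim=1)
        return self.head(h)   # (batch, 1, frames) logits for placing a note

model = MultiScaleChartNet()
logits = model(torch.randn(2, 80, 512), torch.rand(2, 1, 512))
```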
Extensive air showers created by high-energy particles interacting with the atmosphere can be detected using imaging atmospheric Cherenkov telescopes (IACTs). The IACT images can be analyzed to distinguish between events caused by gamma rays and by hadrons and to infer event parameters such as the energy of the primary particle. We use convolutional neural networks (CNNs) to analyze Monte Carlo-simulated images from the telescopes of the TAIGA experiment. The analysis includes selecting the images corresponding to showers caused by gamma rays and estimating the energy of those gamma rays. We compare the performance of CNNs that use images from a single telescope with that of CNNs that use images from both telescopes.
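A minimal sketch (not the authors' code) of a CNN that classifies simulated IACT camera images as gamma- or hadron-induced and regresses the gamma-ray energy; the layer sizes, the assumed 30x30 remapped camera image, and the mono/stereo switch via input channels are illustrative assumptions.

```python
# Hypothetical CNN sketch for gamma/hadron separation and energy estimation.
import torch
import torch.nn as nn

class TaigaLikeCNN(nn.Module):
    def __init__(self, n_telescopes: int = 1):
        super().__init__()
        # One input channel per telescope image (stereo mode stacks two views).
        self.features = nn.Sequential(
            nn.Conv2d(n_telescopes, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
            nn.Flatten(),
        )
        self.classifier = nn.Linear(64, 1)   # gamma vs. hadron logit
        self.regressor = nn.Linear(64, 1)    # log10(energy) estimate

    def forward(self, x):
        h = self.features(x)
        return self.classifier(h), self.regressor(h)

# Mono (one telescope) vs. stereo (two telescopes) comparison:
mono = TaigaLikeCNN(n_telescopes=1)
stereo = TaigaLikeCNN(n_telescopes=2)
logit, log_e = stereo(torch.randn(8, 2, 30, 30))  # batch of 8 two-view images
```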
TAIGA is a hybrid observatory for gamma-ray astronomy covering the high-energy range from 10 TeV to several EeV. It consists of instruments such as TAIGA-IACT and TAIGA-HiSCORE. TAIGA-HiSCORE, in particular, is an array of wide-angle timing Cherenkov light stations. TAIGA-HiSCORE data make it possible to reconstruct air shower characteristics such as the shower energy, arrival direction, and axis coordinates. In this report, we propose determining air shower characteristics with convolutional neural networks. We use convolutional neural networks (CNNs) to analyze HiSCORE events, treating them as images; for this, the times and amplitudes of the events recorded at the HiSCORE stations are used. The report discusses a simple convolutional neural network and its training. In addition, we present some preliminary results on air shower parameters, such as the direction and position of the shower axis and the energy of the primary particle, and compare them with results obtained by conventional methods.
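A minimal sketch (not the authors' code) of the idea of treating a HiSCORE event as a two-channel "image" over the station grid, with channel 0 holding arrival times and channel 1 holding signal amplitudes, and regressing shower parameters; the assumed 16x16 grid and the five outputs (two axis coordinates, two arrival-direction angles, log-energy) are illustrative assumptions.

```python
# Hypothetical CNN sketch for air-shower parameter regression from HiSCORE-style data.
import torch
import torch.nn as nn

class HiscoreCNN(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(2, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
            nn.Flatten(),
            nn.Linear(64, 5),  # x, y of the shower axis; zenith, azimuth; log10(E)
        )

    def forward(self, x):
        return self.net(x)

model = HiscoreCNN()
event = torch.randn(1, 2, 16, 16)  # times and amplitudes on a 16x16 station grid
params = model(event)              # predicted shower parameters
```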